69 research outputs found

    Visual Servoing Invariant to Changes in Camera Intrinsic Parameters

    This research report presents a new visual servoing scheme that is invariant to changes in camera intrinsic parameters. Current visual servoing techniques rely on a reference image learned with the same camera used during servoing. With the new method it is possible to position a camera (with possibly varying intrinsic parameters) with respect to a non-planar object, given a "reference image" taken with a completely different camera. The necessary and sufficient conditions for local asymptotic stability show that the control law is robust in the presence of large calibration errors. Local stability implies that the system can accurately track a path in the invariant space, and the path can be chosen so that the camera follows a straight line in Cartesian space. Simple sufficient conditions are given to keep the tracking error bounded. This promising approach has been successfully tested with an eye-in-hand robotic system.

    An efficient unified approach to direct image registration of rigid and deformable surfaces

    Image-based deformations are generally used to align images of deformable objects moving in 3D space. For the registration of deformable objects, this approach has been shown to give good results. However, it is not satisfactory for registering images of 3D rigid objects, as the underlying structure cannot be directly estimated. The general belief is that obtaining the 3D structure directly is difficult. In this article, we propose a parameterization that is well adapted both to aligning deformable objects and to recovering the structure of 3D objects. Furthermore, the formulation leads to an efficient implementation that can considerably reduce the computational load. Experiments with simulated and real data validate the approach for deformable object registration and 3D structure estimation. The computational efficiency is also compared to a standard method.

    Direct visual servoing with respect to rigid objects

    Existing visual servoing techniques that do not need metric information require, in exchange, prior knowledge about the object's shape and/or the camera's motion. In this paper, we propose a new visual servoing technique that requires neither. The method is direct in the sense that the intensity values of all pixels are used (i.e. we avoid the feature-extraction step, which introduces errors), and that the proposed control error as well as the control law are fully based on image data (i.e. metric measures are neither required nor estimated). Besides not relying on prior information, the scheme is robust to large errors in the camera's internal parameters. We provide theoretical proofs that the proposed task function is locally isomorphic to the camera pose, that the approach is motion- and shape-independent, and that the derived control law ensures local asymptotic stability. Furthermore, the proposed control error allows for simple, smooth, physically valid, singularity-free path planning, which leads to a large domain of convergence for the servoing. The approach is validated through various results using objects of different shapes, large initial displacements, and large errors in the camera's internal parameters.

    Deeper understanding of the homography decomposition for vision-based control

    The displacement of a calibrated camera between two images of a planar object can be estimated by decomposing a homography matrix. The aim of this document is to propose a new method for solving the homography decomposition problem. This new method provides analytical expressions for the solutions of the problem, instead of the traditional numerical procedures. As a result, the translation vector, rotation matrix and object-plane normal are explicitly expressed as functions of the entries of the homography matrix. The main advantage of this method is that it provides a deeper understanding of the homography decomposition problem; for instance, it makes it possible to obtain the relations among the possible solutions of the problem. Thus, new vision-based robot control laws can be designed. For example, the control schemes proposed in this report combine the two final solutions of the problem (only one of them being the true one), assuming that no a priori knowledge is available to discern between them.
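    The decomposition inverts the planar homography relation for calibrated cameras, H = R + (t/d) nᵀ. A minimal numpy sketch (all pose, plane and point values below are illustrative, not from the report) checks that such an H transfers the projection of a plane point between the two views:

    ```python
    import numpy as np

    # Planar homography relation underlying the decomposition (calibrated
    # case): H = R + (t / d) n^T. All numeric values are illustrative.
    angle = np.deg2rad(10)
    R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0,            0.0,           1.0]])  # rotation between views
    t = np.array([0.1, 0.0, 0.05])                         # translation between views
    n = np.array([0.0, 0.0, 1.0])                          # plane normal (view-1 frame)
    d = 2.0                                                # view-1 distance to the plane
    H = R + np.outer(t / d, n)

    # A 3D point on the plane (n . X = d) and its normalized projections
    X = np.array([0.3, -0.2, 2.0])
    x1 = X / X[2]                  # projection in view 1
    X2 = R @ X + t                 # same point in the view-2 frame
    x2 = X2 / X2[2]                # projection in view 2

    Hx1 = H @ x1                   # homography transfer: proportional to x2
    ```

    The analytical method described above goes the other way: given H, it expresses the candidate triplets (R, t/d, n) in closed form rather than via a numerical decomposition.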

    An efficient direct method for improving visual slam

    Traditionally in monocular SLAM, interest features are extracted and matched in successive images. Outliers are rejected a posteriori during a pose estimation process, and then the structure of the scene is reconstructed. In this paper, we propose a new approach for robustly and simultaneously computing the 3D camera displacement, the scene structure and the illumination changes directly from image intensity discrepancies. In this way, instead of depending on particular features, all possible image information is exploited. The problem is solved using an efficient second-order optimization procedure, which yields high convergence rates and large domains of convergence. Furthermore, a new solution to the visual SLAM initialization problem is given, whereby no assumptions are made about either the scene or the camera motion. The proposed approach is validated on experimental and simulated data. Comparisons with existing methods show significant performance improvements.
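    As a toy illustration of the direct, intensity-based formulation (plain Gauss-Newton on a 1-D shift, not the paper's second-order method, which also estimates structure and illumination), the sketch below aligns two signals by minimizing the photometric error over all samples; the signals and shift are made up for the example:

    ```python
    import numpy as np

    # Direct alignment toy problem: recover a 1-D shift between two signals
    # from intensity differences alone, with no feature extraction.
    x = np.linspace(0.0, 10.0, 201)
    reference = np.exp(-(x - 5.0) ** 2)              # "reference image"
    true_shift = 0.4
    current = np.exp(-(x - 5.0 - true_shift) ** 2)   # "current image"

    shift = 0.0
    for _ in range(20):
        warped = np.interp(x + shift, x, current)    # warp current toward reference
        error = warped - reference                   # photometric error (all pixels)
        grad = np.gradient(warped, x)                # Jacobian of the warp w.r.t. shift
        shift -= (grad @ error) / (grad @ grad)      # Gauss-Newton update
    ```

    Exploiting every sample instead of a handful of features is what makes the error well conditioned even on weakly textured data.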

    A New Dense Hybrid Stereo Visual Odometry Approach

    Visual odometry is an important part of the perception module of autonomous robots. Recent advances in deep learning have given rise to hybrid visual odometry approaches that combine deep networks with traditional pose estimation methods. One limitation of deep learning approaches is the availability of the ground-truth data needed to train the neural networks. For example, it is extremely difficult, if not impossible, to obtain a ground-truth dense depth map of the environment for stereo visual odometry. Even though unsupervised training of networks has been investigated, supervised training remains more reliable and robust. In this paper, we propose a new hybrid dense stereo visual odometry approach in which a dense depth map is obtained with a network supervised using ground-truth poses, which can be obtained more easily than ground-truth depth maps. The depth map from the neural network is used to warp the current image into the reference frame, and the optimal pose is obtained by minimizing a cost function that encodes the similarity between the warped image and the reference image. The experimental results show that the proposed approach not only improves state-of-the-art depth map estimation networks on some of the standard benchmark datasets, but also outperforms state-of-the-art visual odometry methods.
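    The warping-based cost can be sketched as follows. This is a deliberately naive per-pixel version with nearest-neighbour sampling and a pinhole model; the function name, model and values are illustrative assumptions, and the actual method minimizes such a cost over the pose:

    ```python
    import numpy as np

    def photometric_cost(ref_img, cur_img, depth, K, R, t):
        """Mean squared intensity difference after warping with depth and pose."""
        h, w = ref_img.shape
        K_inv = np.linalg.inv(K)
        cost, count = 0.0, 0
        for v in range(h):
            for u in range(w):
                # Back-project the reference pixel to 3D with the predicted depth
                X = depth[v, u] * (K_inv @ np.array([u, v, 1.0]))
                # Move into the current frame and re-project
                x = K @ (R @ X + t)
                u2, v2 = x[0] / x[2], x[1] / x[2]
                iu, iv = int(round(u2)), int(round(v2))
                if 0 <= iu < w and 0 <= iv < h:      # keep in-bounds pixels only
                    cost += (cur_img[iv, iu] - ref_img[v, u]) ** 2
                    count += 1
        return cost / max(count, 1)
    ```

    With an accurate depth map, the cost drops to zero at the true relative pose, which is what makes the network's depth quality directly visible in the odometry.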

    Spherical Image Processing for Accurate Visual Odometry with Omnidirectional Cameras

    Due to their omnidirectional view, catadioptric cameras are of great interest for robot localization and visual servoing. For simplicity, most vision-based algorithms use image processing tools (e.g. image smoothing) that were designed for perspective cameras. This can be a good approximation when the camera displacement is small with respect to the distance to the observed environment. Otherwise, perspective image processing tools are unable to accurately handle the signal distortion induced by the specific geometry of omnidirectional cameras. In this paper, we propose an appropriate spherical image processing for increasing the accuracy of visual odometry estimation. The omnidirectional images are mapped onto a unit sphere and treated in the spherical spectral domain. The spherical image processing takes into account the specific geometry of omnidirectional cameras; for example, we can design a more accurate and more repeatable Harris interest point detector. The interest points can be matched between two images with a large baseline in order to accurately estimate the camera motion. We demonstrate with a real experiment the accuracy of the visual odometry obtained using spherical image processing, and the improvement with respect to standard perspective image processing.
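    The mapping onto the unit sphere can be sketched with the unified central projection model, a standard model for catadioptric cameras (the intrinsics K and mirror parameter xi below are illustrative assumptions, not values from the paper):

    ```python
    import numpy as np

    def lift_to_sphere(u, v, K, xi):
        """Lift a pixel of a central catadioptric image onto the unit sphere
        (inverse of the unified central projection model)."""
        # Pixel -> normalized image plane
        x, y, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
        r2 = x * x + y * y
        # Closed-form inverse of the projection x = Xs / (Zs + xi)
        eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
        return np.array([eta * x, eta * y, eta - xi])
    ```

    Once all pixels live on the sphere, smoothing and interest-point detection can operate in the spherical spectral domain rather than on the distorted planar image.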
